    A Conditional Model for Tonal Analysis

    An Ontology Based Architecture for Translation

    Integrating a Cognitive Framework for Knowledge Representation and Categorization in Diverse Cognitive Architectures

    This paper describes the rationale followed for the integration of Dual-PECCS, a cognitively inspired knowledge representation and reasoning system, into two rather different cognitive architectures: ACT-R and CLARION. The resulting integration shows how the representational and reasoning mechanisms implemented by our framework can plausibly be applied to computational models of cognition based on different assumptions.

    Semantic Coherence Dataset: Speech transcripts

    The Semantic Coherence Dataset was designed for experimenting with semantic coherence metrics. More specifically, the dataset was built to test whether probabilistic measures such as perplexity provide stable scores when analyzing spoken language. Perplexity, originally conceived as an information-theoretic measure for assessing the probabilistic inference properties of language models, has recently proven to be an appropriate tool for categorizing speech transcripts on semantic coherence grounds; in particular, it has been successfully employed to discriminate between subjects suffering from Alzheimer's disease and healthy controls.

    The collected data consist of speech transcripts intended to investigate semantic coherence at two levels, and the data are accordingly arranged into two classes: intra-subject semantic coherence and inter-subject semantic coherence. In the former case, transcripts from a single speaker can be used to train and test language models, in order to explore whether the perplexity metric provides stable scores when assessing talks from that speaker while still distinguishing between two different forms of speech, political rallies and interviews. In the latter case, models can be trained on transcripts from a given speaker and then used to measure how stable the perplexity metric is when computed with that speaker's model on transcripts from different speakers.

    Transcripts were extracted from talks lasting almost 13 hours (12:45:17 overall, 120,326 tokens) for the former class, and almost 30 hours (29:47:34, 252,270 tokens) for the latter. These data can be reused for analyses of measures built on top of language models and, more generally, of measures aimed at exploring the linguistic features of text documents.
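    To make the evaluation scheme above concrete, here is a minimal, self-contained sketch: a toy add-one-smoothed bigram language model is trained on one "speaker's" tokens, and perplexity is then computed for in-domain and out-of-domain token sequences. The tiny corpus, whitespace tokenization, and bigram-with-add-one-smoothing setup are illustrative assumptions, not the dataset's actual modeling protocol.

```python
import math
from collections import Counter

def train_bigram_model(tokens):
    """Collect unigram and bigram counts plus the vocabulary."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, set(tokens)

def perplexity(tokens, model):
    """Perplexity of a token sequence under the smoothed bigram model."""
    unigrams, bigrams, vocab = model
    V = len(vocab)
    log_prob, n = 0.0, 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Add-one smoothing assigns mass to unseen bigrams and tokens.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
        log_prob += math.log2(p)
        n += 1
    return 2 ** (-log_prob / n)

# Hypothetical mini-corpus: train on one speaker, score other text.
train = "the model scores coherent speech with low perplexity".split()
model = train_bigram_model(train)
same = perplexity("the model scores coherent speech".split(), model)
other = perplexity("zebras juggle quantum pineapples daily".split(), model)
print(same < other)  # → True: in-domain text scores lower perplexity
```

    The intra-subject experiments described above correspond to `same`-style scoring (model and test text from one speaker), while the inter-subject experiments correspond to `other`-style scoring (one speaker's model applied to another's transcripts).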